Square Deal: Lower Bounds and Improved Convex Relaxations for Tensor Recovery
Authors
Abstract
Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms (SNN) of the unfolding matrices of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a $K$-way $n \times n \times \cdots \times n$ tensor of Tucker rank $(r, r, \ldots, r)$ from Gaussian measurements requires $\Omega(rn^{K-1})$ observations. In contrast, a certain (intractable) nonconvex formulation needs only $O(r^K + nrK)$ observations. We introduce a simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with $O(r^{\lfloor K/2 \rfloor} n^{\lceil K/2 \rceil})$ observations. The lower bound for the SNN model follows from our new result on recovering signals with multiple structures (e.g. sparse, low rank), which indicates the significant suboptimality of the common approach of minimizing the sum of individual sparsity-inducing norms (e.g. $\ell_1$, nuclear norm). Our new tractable formulation for low-rank tensor recovery shows how the sample complexity can be reduced by designing convex regularizers that exploit several structures jointly.
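To make the two convex surrogates concrete, here is a minimal numpy sketch (illustrative only; the function names and setup are ours, not the authors' code). It evaluates the SNN surrogate, which sums the nuclear norms of all $K$ mode-$k$ unfoldings, and the "square" surrogate, which groups the first floor(K/2) modes as rows and the remaining ceil(K/2) modes as columns before taking a single nuclear norm.

import numpy as np

def unfold(T, mode):
    # Mode-k unfolding: bring axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def snn(T):
    # SNN surrogate: sum of nuclear norms of all K unfoldings.
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

def square_norm(T):
    # "Square" surrogate: matricize with the first floor(K/2) modes as rows
    # and the remaining ceil(K/2) modes as columns, then one nuclear norm.
    rows = int(np.prod(T.shape[:T.ndim // 2]))
    return np.linalg.norm(T.reshape(rows, -1), 'nuc')

# Example: a 4-way, Tucker-rank-(1,1,1,1) tensor of side length n = 5.
rng = np.random.default_rng(0)
a, b, c, d = (rng.standard_normal(5) for _ in range(4))
T = np.einsum('i,j,k,l->ijkl', a, b, c, d)
print(snn(T), square_norm(T))

In an actual recovery program one would minimize such a surrogate subject to consistency with the linear Gaussian measurements; the sketch above only evaluates the regularizers themselves.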
Similar Resources
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements...
On convex relaxations of quadrilinear terms
The best known method to find exact or at least ε-approximate solutions to polynomial programming problems is the spatial Branch-and-Bound algorithm, which rests on computing lower bounds to the value of the objective function to be minimized on each region that it explores. These lower bounds are often computed by solving convex relaxations of the original program. Although convex envelopes ar...
On the Exponent of Triple Tensor Product of p-Groups
The non-abelian tensor product of groups, which has its origins in algebraic K-theory as well as in homotopy theory, was introduced by Brown and Loday in 1987. Group theoretical aspects of non-abelian tensor products have been studied extensively. In particular, some studies focused on the relationship between the exponent of a group and the exponent of its tensor square. On the other hand, com...
Strong exponent bounds for the local Rankin-Selberg convolution
Let $F$ be a non-Archimedean locally compact field. Let $\sigma$ and $\tau$ be finite-dimensional representations of the Weil-Deligne group of $F$. We give strong upper and lower bounds for the Artin and Swan exponents of $\sigma \otimes \tau$ in terms of those of $\sigma$ and $\tau$. We give a different lower bound in terms of $\sigma \otimes \check{\sigma}$ and $\tau \otimes \check{\tau}$. Using the Langlands...
Convex relaxations of chance constrained optimization problems
In this paper we develop convex relaxations of chance constrained optimization problems in order to obtain lower bounds on the optimal value. Unlike existing statistical lower bounding techniques, our approach is designed to provide deterministic lower bounds. We show that a version of the proposed scheme leads to a tractable convex relaxation when the chance constraint function is affine with ...